Arista financials offer glimpse of AI network development
AI cluster networking speeds are expected to grow from 200/400/800 Gbps today to over 1 Tbps in the near future, according to Sameh Boujelbene, vice president for Ethernet switch market research at Dell’Oro Group.
Dell’Oro forecasts that by 2025 the majority of ports in AI networks will run at 800 Gbps, and by 2027 the majority will run at 1,600 Gbps, a very fast adoption of the highest speeds available in the market. “This pace of migration is almost twice as fast as what we usually see in the traditional front-end network that is used to connect general-purpose servers,” Boujelbene stated in a recent report.
Arista believes it has a strong, three-pronged approach to scaling network speeds as needed and capitalizing on the current growth in AI communications. Three key products – the Arista 7700 R4 Distributed Etherlink Switch, the 7800 R4 Spine switch, and the 7060X6 Leaf – are all in production and support 800G as well as 400G optical links.
Facebook’s parent company, Meta Platforms, helped develop the 7700 and recently said it would be deploying the Etherlink switch in its Disaggregated Scheduled Fabric (DSF), which features a multi-tier network that supports around 100,000 DPUs, according to reports. The 7700R4 AI Distributed Etherlink Switch (DES) supports the largest AI clusters, offering massively parallel distributed scheduling and congestion-free traffic spraying based on the Jericho3-AI architecture.
The 7060X6 AI Leaf switch features Broadcom Tomahawk 5 silicon with a capacity of 51.2 Tbps and support for 64 800G or 128 400G Ethernet ports. The 7800R4 AI Spine utilizes Broadcom Jericho3-AI processors with an AI-optimized packet pipeline and supports up to 460 Tbps in a single chassis, which corresponds to 576 800G or 1,152 400G Ethernet ports.
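For readers checking the math, the quoted port configurations line up with the stated aggregate capacities. The short Python sketch below is purely illustrative and uses only the figures cited in this article:

```python
# Sanity check of the port counts and aggregate capacities quoted above.
# Figures come from the article: 7060X6 leaf (51.2 Tbps) and 7800R4 spine
# (up to ~460 Tbps per chassis). Illustrative arithmetic only.

def aggregate_tbps(ports: int, port_speed_gbps: int) -> float:
    """Total switching capacity in Tbps for a given port count and speed."""
    return ports * port_speed_gbps / 1000

configs = {
    "7060X6 leaf, 64 x 800G":    (64, 800),
    "7060X6 leaf, 128 x 400G":   (128, 400),
    "7800R4 spine, 576 x 800G":  (576, 800),
    "7800R4 spine, 1152 x 400G": (1152, 400),
}

for name, (ports, speed) in configs.items():
    print(f"{name}: {aggregate_tbps(ports, speed):.1f} Tbps")
```

Running it yields 51.2 Tbps for both 7060X6 configurations and 460.8 Tbps for both 7800R4 configurations, consistent with the 51.2 Tbps and roughly 460 Tbps figures cited.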
“This broad range of Ethernet platforms allows our customers to optimize density and minimize tiers to best match the requirements of their AI work,” said John McCool, Arista senior vice president and chief platform officer, during the financial call.